A Gentle Introduction to the EM Algorithm, Part I: Theory

Authors

  • Hemant D. Tagare
  • Martin Tanner

Abstract

Introduction. My aim is to introduce the Expectation-Maximization (EM) algorithm to you, especially some of its theory. I will skip proofs, but I will derive many formulae that have practical use. The EM algorithm is iterative, and you should be familiar with its convergence properties; I will discuss them in detail. I will present applications of the EM algorithm to signal and image processing in a companion tutorial called "A Gentle Introduction to the EM Algorithm, Part II: Applications." There are two excellent books about the EM algorithm. Martin Tanner's book is concise and deceptively simple-looking; every line in that book counts, so read it carefully. McLachlan and Krishnan's book is more comprehensive and has many extensions of the EM algorithm; I learned a lot from it. I recommend both books to the EM enthusiast. Be warned that these books are written by statisticians for statisticians; if you are not a statistician, some of the notation might be unfamiliar. I cannot introduce the EM algorithm without discussing parameter estimation, so without any more fuss, let me review the principles of parameter estimation. I will assume that you understand the basic principles of probability: prior, joint, and conditional distributions, and Bayes' rule.
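As a small illustration of the maximum-likelihood parameter estimation the abstract promises to review (a minimal sketch, not taken from the tutorial itself; the data and true parameters below are made up), the ML estimates for a Gaussian sample have closed forms: the sample mean and the biased sample variance.

```python
import numpy as np

# Hypothetical data: 1000 draws from a Gaussian with (unknown to the
# estimator) mean 2.0 and standard deviation 1.5.
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=1000)

# Maximum-likelihood estimates for a Gaussian: setting the gradient of the
# log-likelihood to zero yields the sample mean and the biased sample
# variance (divisor n, not n - 1).
mu_hat = x.mean()
sigma2_hat = ((x - mu_hat) ** 2).mean()
print(mu_hat, sigma2_hat)
```

With enough samples, the estimates land close to the true mean 2.0 and true variance 1.5² = 2.25.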

Similar Articles

Expectation Maximization: a Gentle Introduction

This tutorial was basically written for students and researchers who want a first introduction to the Expectation-Maximization (EM) algorithm. The main motivation for writing this tutorial was the fact that I did not find any text that fitted my needs. I started with the great book "Artificial Intelligence: A Modern Approach" of Russell and Norvig [6], which provides lots of intuition, but I was...


Mixture Models and Expectation-Maximization

This tutorial attempts to provide a gentle introduction to EM by way of simple examples involving maximum-likelihood estimation of mixture-model parameters. Readers familiar with ML parameter estimation and clustering may want to skip directly to Sections 5.2 and 5.3.


Genetic Algorithms and Walsh Functions: Part I, A Gentle Introduction

This paper investigates the application of Walsh functions to the analysis of genetic algorithms operating on different coding-function combinations. Although these analysis tools have been in existence for some time, they have not been widely used. To promote their understanding and use, this paper introduces Bethke's Walsh-schema transform through the Walsh polynomials. This form of the method...


A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models

We describe the maximum-likelihood parameter estimation problem and how the Expectation-Maximization (EM) algorithm can be used for its solution. We first describe the abstract form of the EM algorithm as it is often given in the literature. We then develop the EM parameter estimation procedure for two applications: 1) finding the parameters of a mixture of Gaussian densities, and 2) finding the...
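The Gaussian-mixture application mentioned in this abstract can be sketched in a few lines (an illustrative sketch, not the authors' code; the data, initial guesses, and iteration count are all invented here). Each EM iteration alternates an E-step, which computes each component's posterior responsibility for each point, with an M-step, which re-estimates the parameters from those responsibilities.

```python
import numpy as np

# Hypothetical data: two 1-D Gaussian clusters (means -2 and 3, unit variance).
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])

# Illustrative starting guesses for weights, means, and variances.
pi = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

def gauss(x, mu, var):
    """Gaussian density, broadcast over components."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

for _ in range(50):
    # E-step: responsibility r[i, k] = P(component k | point i).
    r = pi * gauss(x[:, None], mu, var)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: weighted re-estimates of mixture weights, means, variances.
    n_k = r.sum(axis=0)
    pi = n_k / len(x)
    mu = (r * x[:, None]).sum(axis=0) / n_k
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k

print(np.sort(mu))  # means should land near the true values -2 and 3
```

Each iteration increases the data log-likelihood, which is the convergence property the Part I tutorial above discusses in detail.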


Convexity, Maximum Likelihood and All That

This note is meant as a gentle but comprehensive introduction to the expectation-maximization (EM) and improved iterative scaling (IIS) algorithms, two popular techniques in maximum likelihood estimation. The focus in this tutorial is on the foundation common to the two algorithms: convex functions and their convenient properties. Where examples are called for, we draw from applications in huma...



Publication date: 2005